
    Automating Inspection of Tunnels With Photogrammetry and Deep Learning

    Asset management of large underground transportation infrastructure requires frequent and detailed inspections to assess its overall structural condition and to focus available funds where they are required. At the time of writing, the common approach to visual inspections is heavily manual, and therefore slow, expensive, and highly subjective. This research evaluates the applicability of an automated pipeline for performing visual inspections of underground infrastructure for asset management purposes. It also analyses the benefits of using lightweight, low-cost hardware versus high-end technology. The aim is to increase the automation of this task in order to overcome the main drawbacks of the traditional regime: the subjectivity, approximation and limited repeatability of manual inspection are replaced with objectivity and consistent accuracy. Moreover, the automated approach reduces the overall end-to-end time required for the inspection and the associated costs. This may translate into more frequent inspections for a given budget, resulting in increased service life of the infrastructure. Shorter inspections have social benefits as well: local communities can rely on safe transportation with minimal disservice. Last but not least, the approach drastically improves health and safety conditions for the inspection engineers, who need to spend less time in this hazardous environment. The proposed pipeline combines photogrammetric techniques for photo-realistic 3D reconstruction with machine learning-based defect detection algorithms. This approach detects and maps visible defects on the tunnel's lining in a local coordinate system and provides the asset manager with a clear overview of the critical areas across the whole infrastructure. The outcomes of the research show that the accuracy of the proposed pipeline largely outperforms human results, both in three-dimensional mapping and in defect detection, pushing the benefit-cost ratio strongly in favour of the automated approach. Such outcomes will impact the way the construction industry approaches visual inspections and shift it towards automated strategies.
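
    A minimal sketch of the geometric step such a pipeline depends on is given below: once photogrammetry has recovered each camera's intrinsics K and pose (R, t), a defect pixel found by the detector can be back-projected onto the reconstructed lining. The function name, the example values, and the assumption that the depth along the ray is already known (e.g. from a ray/mesh intersection against the 3D model) are illustrative, not the authors' implementation.

        import numpy as np

        def defect_pixel_to_world(u, v, d, K, R, t):
            """Back-project pixel (u, v) at depth d into world coordinates.
            R, t map world -> camera, so the inverse pose is applied here."""
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-frame ray
            p_cam = d * ray_cam                                 # 3D point, camera frame
            return R.T @ (p_cam - t)                            # same point, world frame

        # Illustrative intrinsics and an identity pose; real values come
        # from the photogrammetric reconstruction.
        K = np.array([[1000.0, 0.0, 640.0],
                      [0.0, 1000.0, 480.0],
                      [0.0,    0.0,   1.0]])
        print(defect_pixel_to_world(700, 500, d=3.2, K=K, R=np.eye(3), t=np.zeros(3)))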

    AIforCOVID: predicting the clinical outcomes in patients with COVID-19 applying AI to chest-X-rays. An Italian multicentre study

    Recent epidemiological data report that worldwide more than 53 million people have been infected by SARS-CoV-2, resulting in 1.3 million deaths. The disease has been spreading very rapidly, and a few months after the identification of the first infected patients, the shortage of hospital resources quickly became a problem. In this work we investigate whether chest X-ray (CXR) can be used as a tool for the early identification of patients at risk of a severe outcome, such as intensive care or death. CXR is a radiological technique that, compared to computed tomography (CT), is simpler, faster, more widespread, and induces a lower radiation dose. We present a dataset of 820 patients collected by six Italian hospitals in spring 2020 during the first COVID-19 emergency. The dataset includes CXR images, several clinical attributes, and clinical outcomes. We investigate the potential of artificial intelligence to predict the prognosis of such patients, distinguishing between severe and mild cases, thus offering a baseline reference for other researchers and practitioners. To this end, we present three approaches that use features extracted from CXR images, either handcrafted or learned automatically by convolutional neural networks, which are then integrated with the clinical data. Exhaustive evaluation shows promising performance both in 10-fold and leave-one-centre-out cross-validation, implying that clinical data and images have the potential to provide useful information for the management of patients and hospital resources.
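
    The integration of image features with clinical data lends itself to a compact illustration. The sketch below (Python, assuming PyTorch, torchvision and scikit-learn) extracts CNN features from CXR tensors with an off-the-shelf ResNet-18 and concatenates them with clinical attributes before a simple classifier; the backbone, the classifier, and the placeholder data are assumptions for illustration, not the study's exact configuration.

        import numpy as np
        import torch
        from torchvision import models
        from sklearn.linear_model import LogisticRegression

        # Off-the-shelf backbone used as a feature extractor (illustrative choice).
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        resnet.fc = torch.nn.Identity()   # expose the 512-d feature vector
        resnet.eval()

        @torch.no_grad()
        def image_features(batch):        # batch: (N, 3, 224, 224) float tensor
            return resnet(batch).numpy()

        # Placeholders standing in for preprocessed CXRs, clinical attributes
        # and the severe/mild outcome labels of the real dataset.
        X_img = torch.randn(8, 3, 224, 224)
        X_clin = np.random.rand(8, 5)
        y = np.array([0, 1] * 4)

        # Early fusion: concatenate image and clinical features, then classify.
        X = np.hstack([image_features(X_img), X_clin])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print(clf.predict(X))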

    The Future of Cities

    This report is an initiative of the Joint Research Centre (JRC), the science and knowledge service of the European Commission (EC), supported by the Commission's Directorate-General for Regional and Urban Policy (DG REGIO). It highlights the drivers shaping the urban future, identifying both the key challenges cities will have to address and the strengths they can capitalise on to proactively build their desired futures. The main aim of this report is to raise open questions and steer discussions on what the future of cities can, and should, be, both within the science and policymaker communities. While it addresses mainly European cities, examples from other world regions are also given, since many challenges and solutions have global relevance. The report is particularly novel in two ways. First, it was developed in an inclusive manner: close collaboration with the EC's Community of Practice on Cities (CoP-CITIES) provided insights from the broader research community and city networks, including individual municipalities, as well as Commission services and international organisations; it was also extensively reviewed by an Editorial Board. Secondly, the report is supported by an online 'living' platform which will host future updates, including additional analyses, discussions, case studies, comments and interactive maps that go beyond the scope of the current version of the report. Steered by the JRC, the platform will offer a permanent virtual space to the research, practice and policymaking community for sharing and accumulating knowledge on the future of cities. This report is produced in the framework of the EC Knowledge Centre for Territorial Policies and is part of a wider series of flagship Science for Policy reports by the JRC investigating future perspectives concerning Artificial Intelligence, the Future of Road Transport, Resilience, Cybersecurity and Fairness. Interactive online platform: https://urban.jrc.ec.europa.eu/thefutureofcities

    Semantic segmentation of cracks: Data challenges and architecture

    Deep Learning (DL) semantic image segmentation is a technique used in several fields of research. The present paper analyses semantic crack segmentation as a case study to review up-to-date research on semantic segmentation in the presence of fine structures, and the effectiveness of established approaches to address the inherent class imbalance issue. The established UNet architecture is tested against networks consisting exclusively of stacked convolutions without pooling layers (straight networks), with regard to the resolution of their segmentation results. Dice and Focal losses are also compared against each other to evaluate their effectiveness on highly imbalanced data. With the same aim, dropout and data augmentation are tested, as additional regularizing mechanisms, to address the uneven distribution of the dataset. The experiments show that a good selection of the loss function has more impact on handling the class imbalance and boosting detection performance than all the other regularizers with regard to segmentation resolution. Moreover, UNet, the architecture considered as the reference, clearly outperforms the networks with no pooling layers both in performance and in training time. The authors argue that UNet architectures, compared to the networks with no pooling layers, achieve high detection performance at a very low cost in terms of training time, and therefore consider this architecture the state of the art for semantic segmentation of cracks. On the other hand, once computational cost is no longer an issue thanks to constant improvements in technology, the application of networks without pooling layers might become attractive again because of their simplicity and high performance.
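
    For reference, the two loss functions compared here can be stated compactly. The sketch below (Python, assuming PyTorch) gives the usual binary formulations of the Dice and Focal losses for pixel-wise crack masks; the hyper-parameter values are common defaults, not necessarily those used in the paper.

        import torch

        def dice_loss(logits, targets, eps=1e-6):
            # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), on predicted probabilities.
            probs = torch.sigmoid(logits)
            inter = (probs * targets).sum()
            union = probs.sum() + targets.sum()
            return 1.0 - (2.0 * inter + eps) / (union + eps)

        def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
            # Focal loss (Lin et al.): down-weights easy, well-classified pixels
            # so the rare crack pixels dominate the gradient.
            bce = torch.nn.functional.binary_cross_entropy_with_logits(
                logits, targets, reduction="none")
            p_t = torch.exp(-bce)                       # probability of the true class
            alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
            return (alpha_t * (1 - p_t) ** gamma * bce).mean()

        # usage: loss = dice_loss(model(images), masks)  # masks as float tensors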

    A tuning procedure for the electric networks of PEM systems

    Damping of vibrations in piezo-electromechanical (PEM) structures relies on a good design of the related electric networks. The most interesting solutions, from the complexity point of view, make use of two operational amplifiers for each piezoelectric transducer. Although the behavior of such circuits coincides with that of the ideal controller when the two operational amplifiers are assumed to be ideal, damping and stability issues arise when real active components are used. In this paper, it is shown how the actual performance of these circuits can be improved by modifying the interconnection among components. ©2010 IEEE

    Shapes classification of dust deposition using fuzzy kernel-based approaches

    Dust deposition and pollution are relevant issues in indoor environments, especially concerning human health and the conservation of objects and works of art. In this framework, several tools have been proposed in recent years to analyse dust deposition and extract useful information for addressing the phenomenon. In this paper, a novel approach for dust analysis and classification is proposed, employing machine learning and fuzzy logic to set up a simple and practical tool. The proposed approach is tested and compared with similar, previously introduced techniques in order to evaluate its performance.

    Stability analysis of optimal PEM networks

    Previously, the present authors proposed an RC-active synthesis of the 'fourth-order line', an electrical controller for piezo-electro-mechanical systems. The advantage of that synthesis was reduced circuit complexity, although it may become unstable under some non-ideal behaviours. A new circuit is proposed here that has the same complexity but safer stability margins.

    Adaptive resolution min-max classifiers

    A high degree of automation is one of the most important features of data-driven modeling tools, and it should be taken into consideration in the design of classification systems. In this regard, constructive training algorithms are essential to improve the automation degree of a modeling system. Among neuro-fuzzy classifiers, Simpson's Min-Max networks have the advantage of being trained in a constructive way. The use of the hyperbox as a frame on which different membership functions can be tailored makes the Min-Max model a flexible tool. However, the original training algorithm shows some serious drawbacks, together with a low automation degree. In order to overcome these inconveniences, two new learning algorithms for fuzzy Min-Max neural classifiers are proposed in this paper: the Adaptive Resolution Classifier (ARC) and its pruning version (PARC). ARC/PARC generates a regularized Min-Max network by a succession of hyperbox cuts. The generalization capability of the ARC/PARC technique mostly depends on the adopted cutting strategy. By using a recursive cutting procedure (R-ARC and R-PARC) it is possible to obtain better results. ARC, PARC, R-ARC and R-PARC are characterized by a high automation degree and yield networks with remarkable generalization capability. Their performance is evaluated on a set of toy problems and real-data benchmarks. We also propose a suitable index for the sensitivity analysis of the classification systems under consideration.
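
    The hyperbox at the core of the Min-Max model is easy to state concretely. The sketch below (Python/NumPy) implements the membership function of Simpson's original fuzzy Min-Max network, the building block that ARC/PARC cuts and prunes; the sensitivity parameter value and the example data are illustrative.

        import numpy as np

        def hyperbox_membership(x, v, w, gamma=4.0):
            """Membership of pattern x in the hyperbox with min point v and
            max point w (all arrays of shape (n,)); result lies in [0, 1].
            gamma controls how fast membership decays outside the box."""
            n = x.size
            above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
            below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
            return (above.sum() + below.sum()) / (2 * n)

        # Inside the box the membership is 1; it ramps down linearly outside.
        x = np.array([0.55, 0.30])
        print(hyperbox_membership(x, v=np.array([0.2, 0.2]), w=np.array([0.5, 0.5])))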